Stereovision Image Processing for Planetary Navigation Maps with Semi-Global Matching and Superpixel Segmentation

Lu, Yan-Shan, Arana-Catania, Miguel, Upadhyay, Saurabh, Felicetti, Leonard

arXiv.org Artificial Intelligence

Mars exploration requires precise and reliable terrain models to ensure safe rover navigation across its unpredictable and often hazardous landscapes. Stereoscopic vision plays a critical role in the rover's perception, allowing scene reconstruction by generating precise depth maps through stereo matching. State-of-the-art Martian planetary exploration uses traditional local block-matching, aggregates cost over square windows, and refines disparities via smoothness constraints. However, this method often struggles with low-texture images, occlusion, and repetitive patterns because it considers only limited neighbouring pixels and lacks a wider understanding of scene context. This paper uses Semi-Global Matching (SGM) with superpixel-based refinement to mitigate the inherent block artefacts and recover lost details. The approach balances the efficiency and accuracy of SGM and adds context-aware segmentation to support more coherent depth inference. The proposed method has been evaluated on three datasets with successful results: In a Mars analogue, the terrain maps obtained show improved structural consistency, particularly in sloped or occlusion-prone regions. Large gaps behind rocks, which are common in raw disparity outputs, are reduced, and surface details like small rocks and edges are captured more accurately. Two further datasets, evaluated to test the method's general robustness and adaptability, show more precise disparity maps and more consistent terrain models, better suited for the demands of autonomous navigation on Mars, and competitive accuracy across both non-occluded and full-image error metrics. This paper outlines the entire terrain modelling process, from finding corresponding features to generating the final 2D navigation maps, offering a complete pipeline suitable for integration in future planetary exploration missions.
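As a minimal illustration of the depth-recovery step such a stereo pipeline relies on (not the paper's implementation), the sketch below converts a disparity map into metric depth via Z = f·B/d and leaves the matching gaps the abstract mentions as NaN; the focal length and baseline are hypothetical values.

```python
import numpy as np

def disparity_to_depth(disp, focal_px, baseline_m, min_disp=1e-6):
    """Convert a stereo disparity map (pixels) to metric depth (metres).

    Z = f * B / d; pixels with disparity <= min_disp (occlusions or
    matching failures) are returned as NaN.
    """
    disp = np.asarray(disp, dtype=float)
    depth = np.full_like(disp, np.nan)
    valid = disp > min_disp
    depth[valid] = focal_px * baseline_m / disp[valid]
    return depth

# Hypothetical rover stereo rig: 1000 px focal length, 0.2 m baseline.
disp = np.array([[10.0, 0.0],
                 [5.0, 20.0]])  # 0.0 marks a matching gap
depth = disparity_to_depth(disp, focal_px=1000.0, baseline_m=0.2)
```

A superpixel-based refinement stage, as described in the abstract, would then fill the NaN regions using segment-level statistics rather than leaving them empty.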


Validating Terrain Models in Digital Twins for Trustworthy sUAS Operations

Bernal, Arturo Miguel Russell, Petterson, Maureen, Granadeno, Pedro Antonio Alarcon, Murphy, Michael, Mason, James, Cleland-Huang, Jane

arXiv.org Artificial Intelligence

With the increasing deployment of small Unmanned Aircraft Systems (sUAS) in unfamiliar and complex environments, Environmental Digital Twins (EDT) that comprise weather, airspace, and terrain data are critical for safe flight planning and for maintaining appropriate altitudes during search and surveillance operations. With the expansion of sUAS capabilities through edge and cloud computing, accurate EDT are also vital for advanced sUAS capabilities, such as geolocation. However, real-world sUAS deployment introduces significant sources of uncertainty, necessitating a robust validation process for EDT components. This paper focuses on the validation of terrain models, one of the key components of an EDT, for real-world sUAS tasks. These models are constructed by fusing U.S. Geological Survey (USGS) datasets and satellite imagery, incorporating high-resolution environmental data to support mission tasks. Validating both the terrain models and their operational use by sUAS under real-world conditions presents significant challenges, including limited data granularity, terrain discontinuities, GPS and sensor inaccuracies, visual detection uncertainties, as well as onboard resource and timing constraints. We propose a 3-dimensional validation process grounded in software engineering principles, following a workflow across granularities of tests, from simulation to the real world, and from simple to edge conditions. We demonstrate our approach using a multi-sUAS platform equipped with a Terrain-Aware Digital Shadow.

As swarms of small Unmanned Aircraft Systems (sUAS) are increasingly deployed in complex, unstructured environments such as disaster zones, wilderness areas, and wildfire regions, the need for accurate environmental models becomes critical. Effective sUAS mission planning requires awareness not only of dynamic airspace and weather conditions but also of the underlying terrain.
In such settings, terrain is often the dominant factor influencing flight safety, sensor placement, line-of-sight communications, and search effectiveness. This paper focuses specifically on the role of terrain models that enable mission-level decision-making and flight planning for sUAS operations. However, terrain inaccuracies or blind spots, such as missing elevation data, undetected peaks, or mismatched georeferencing, can result in ineffective or even hazardous behavior by autonomous vehicles. To minimize these issues, we construct and maintain a terrain model by fusing multiple sources of environmental data, including public USGS datasets [1], [2], and satellite imagery [3].
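A minimal sketch of the kind of reference-point check a terrain-model validation workflow might include (the function, its tolerance, and the toy model below are illustrative assumptions, not the paper's procedure):

```python
def validate_elevations(model_query, reference_points, tol_m=2.0):
    """Check a terrain model against surveyed reference points.

    model_query(lat, lon) returns the model's elevation; each reference
    point is (lat, lon, elevation_m). Returns (pass_fraction, failures),
    where failures lists points whose error exceeds tol_m.
    """
    failures = []
    for lat, lon, elev in reference_points:
        err = abs(model_query(lat, lon) - elev)
        if err > tol_m:
            failures.append((lat, lon, err))
    pass_fraction = 1.0 - len(failures) / len(reference_points)
    return pass_fraction, failures

# Toy terrain model: mostly flat at 100 m, with a 5 m bias at one cell.
model = {(0, 0): 100.0, (0, 1): 100.0, (1, 0): 105.0}
ok, bad = validate_elevations(lambda la, lo: model[(la, lo)],
                              [(0, 0, 100.0), (0, 1, 101.0), (1, 0, 100.0)])
```

In practice, such checks would run at several granularities, first in simulation and then against field-collected GPS data, mirroring the workflow the abstract describes.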


A Novel Methodology for Autonomous Planetary Exploration Using Multi-Robot Teams

Swinton, Sarah, Ewers, Jan-Hendrik, McGookin, Euan, Anderson, David, Thomson, Douglas

arXiv.org Artificial Intelligence

One of the fundamental limiting factors in planetary exploration is the autonomous capability of planetary exploration rovers. This study proposes a novel methodology for trustworthy autonomous multi-robot teams that incorporates data from multiple sources (HiRISE orbiter imaging, probability distribution maps, and on-board rover sensors) to find efficient exploration routes in Jezero crater. A map is generated, consisting of a 3D terrain model, traversability analysis, and a probability distribution map of points of scientific interest. A three-stage mission planner generates an efficient route, which maximises the accumulated probability of identifying points of interest. A 4D RRT* algorithm is used to determine smooth, flat paths, and prioritised planning is used to coordinate a safe set of paths. The methodology is shown to coordinate safe and efficient rover paths, ensuring that the rovers remain within their nominal pitch and roll limits throughout operation.
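The pitch/roll constraint mentioned above can be illustrated with a minimal slope-based traversability check on a heightmap (the 25-degree limit and the use of slope alone are simplifying assumptions; the paper's traversability analysis considers more than this):

```python
import numpy as np

def traversable_mask(elevation, cell_size_m, max_slope_deg=25.0):
    """Mark grid cells whose local slope is within a rover's limit.

    Slope is the arctangent of the elevation-gradient magnitude; a full
    traversability analysis would also account for roughness and step
    height (values here are illustrative only).
    """
    dzdy, dzdx = np.gradient(np.asarray(elevation, dtype=float), cell_size_m)
    slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    return slope_deg <= max_slope_deg

# 3x3 patch: a flat area next to a 1 m rise over a 1 m cell (45 degrees).
z = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])
mask = traversable_mask(z, cell_size_m=1.0)
```

The flat column passes, while the cells on and adjacent to the rise exceed the limit and would be excluded from the route search.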


Examining the simulation-to-reality gap of a wheel loader digging in deformable terrain

Aoshima, Koji, Servin, Martin

arXiv.org Artificial Intelligence

We investigate how well a physics-based simulator can replicate a real wheel loader performing bucket filling in a pile of soil. The comparison is made using field test time series of the vehicle motion and actuation forces, loaded mass, and total work. The vehicle was modeled as a rigid multibody system with frictional contacts, driveline, and linear actuators. For the soil, we tested discrete element models of different resolutions, with and without multiscale acceleration. The spatio-temporal resolution ranged from 50 to 400 mm and from 2 to 500 ms, and the computational speed ranged from 1/10,000 of real-time to 5 times faster than real-time. The simulation-to-reality gap was found to be around 10% and exhibited only a weak dependence on the level of fidelity, including levels compatible with real-time simulation. Furthermore, the sensitivity of an optimized force feedback controller under transfer between different simulation domains was investigated. The domain bias was observed to cause a performance reduction of 5%, despite the domain gap being about 15%.
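One simple way to quantify a simulation-to-reality gap of this kind is the mean relative deviation between paired field and simulated time series; the sketch below applies that metric to toy force traces (the paper's exact gap definition may differ):

```python
import numpy as np

def relative_gap(real, sim):
    """Simulation-to-reality gap as the mean relative deviation between
    paired time series. One plausible metric, not necessarily the one
    used in the paper."""
    real = np.asarray(real, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return float(np.mean(np.abs(sim - real) / np.abs(real)))

# Toy actuation-force samples (kN): simulator within ~10% of field data.
real_force = [100.0, 120.0, 150.0]
sim_force = [110.0, 108.0, 165.0]
gap = relative_gap(real_force, sim_force)
```

The same scalar could be computed per channel (motion, forces, loaded mass, work) and aggregated, which is roughly how a ~10% overall gap figure would arise.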


Contrastive Label Disambiguation for Self-Supervised Terrain Traversability Learning in Off-Road Environments

Xue, Hanzhang, Hu, Xiaochang, Xie, Rui, Fu, Hao, Xiao, Liang, Nie, Yiming, Dai, Bin

arXiv.org Artificial Intelligence

Discriminating the traversability of terrains is a crucial task for autonomous driving in off-road environments. However, it is challenging due to the diverse, ambiguous, and platform-specific nature of off-road traversability. In this paper, we propose a novel self-supervised terrain traversability learning framework, utilizing a contrastive label disambiguation mechanism. Firstly, weakly labeled training samples with pseudo labels are automatically generated by projecting actual driving experiences onto the terrain models constructed in real time. Subsequently, a prototype-based contrastive representation learning method is designed to learn distinguishable embeddings, facilitating the self-supervised updating of those pseudo labels. Through the iterative interaction between representation learning and pseudo-label updating, the ambiguities in the pseudo labels are gradually eliminated, enabling the learning of platform-specific and task-specific traversability without any human-provided annotations. Experimental results on the RELLIS-3D dataset and our Gobi Desert driving dataset demonstrate the effectiveness of the proposed method.
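The pseudo-label updating step can be caricatured as nearest-prototype reassignment in the learned embedding space; the sketch below shows only that step, on toy 2-D embeddings (the actual method couples this with contrastive representation learning and iterates):

```python
import numpy as np

def update_pseudo_labels(embeddings, prototypes):
    """Reassign pseudo labels by the nearest class prototype in
    embedding space -- a minimal stand-in for prototype-based
    label disambiguation."""
    # Pairwise distances, shape (n_samples, n_classes).
    d = np.linalg.norm(embeddings[:, None, :] - prototypes[None, :, :],
                       axis=2)
    return d.argmin(axis=1)

# Two hypothetical class prototypes (traversable / non-traversable).
protos = np.array([[0.0, 0.0], [10.0, 10.0]])
emb = np.array([[1.0, 0.5], [9.0, 9.5], [0.2, 0.1]])
labels = update_pseudo_labels(emb, protos)
```

In the full framework, the prototypes themselves are refined as the embeddings improve, so ambiguous samples migrate toward the correct class over iterations.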


Traversability Analysis for Autonomous Driving in Complex Environment: A LiDAR-based Terrain Modeling Approach

Xue, Hanzhang, Fu, Hao, Xiao, Liang, Fan, Yiming, Zhao, Dawei, Dai, Bin

arXiv.org Artificial Intelligence

For autonomous driving, traversability analysis is one of the most basic and essential tasks. In this paper, we propose a novel LiDAR-based terrain modeling approach, which outputs stable, complete, and accurate terrain models and traversability analysis results. As terrain is an inherent property of the environment that does not change with different view angles, our approach adopts a multi-frame information fusion strategy for terrain modeling. Specifically, a normal distributions transform mapping approach is adopted to accurately model the terrain by fusing information from consecutive LiDAR frames. Then spatial-temporal Bayesian generalized kernel inference and bilateral filtering are utilized to promote the stability and completeness of the results while retaining sharp terrain edges. Based on the terrain modeling results, the traversability of each region is obtained by performing geometric connectivity analysis between neighboring terrain regions. Experimental results show that the proposed method runs in real time and outperforms state-of-the-art approaches.
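Multi-frame terrain fusion of the sort described can be sketched as a per-cell running Gaussian over observed heights, updated online with Welford's algorithm (a simplified stand-in for normal distributions transform mapping; the cell and its values are illustrative):

```python
class CellHeightModel:
    """Running Gaussian height estimate for one terrain grid cell,
    fused across consecutive LiDAR frames. A toy analogue of per-cell
    normal-distribution modeling, not the paper's implementation."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def update(self, z):
        # Welford's online update: numerically stable mean/variance.
        self.n += 1
        delta = z - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (z - self.mean)

    @property
    def var(self):
        # Population variance of the heights seen so far.
        return self.m2 / self.n if self.n > 1 else 0.0

cell = CellHeightModel()
for z in [1.0, 1.2, 0.8, 1.0]:  # heights from four consecutive frames
    cell.update(z)
```

A low per-cell variance indicates a stable ground estimate, while a high variance flags cells (e.g. vegetation or dynamic objects) that need further filtering before traversability analysis.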